.require "memo.pub[let,jmc]" source
.once center
%3Notes on Searle's "Notes on artificial intelligence"%1
(These notes depend on possessing the paper, and part
of them were written on the basis of the first draft).
I think that suitable computer programs can %2understand%1. However,
such a program is not a theory in itself any more than a man who
understands is a theory. Admittedly there is a bobtailed theory
of understanding based on any program or man who understands. The
theory is that %2understanding is the internal structure required to
account for the behavior of the program or man%1. In the case of
the program, the reader is invited to read the listing, and in the
case of the man, the reader is invited to do physiology. Such theories
of understanding won't be very illuminating unless the program is
more comprehensible than any of today's substantial programs or unless
physiology becomes much easier, both technically and conceptually,
than it has been.
It is much more reasonable to regard a program that understands
as an illustration of a theory of understanding and an existence proof
that the form of %2understanding%1 built into the program meets some
of our intuitive desiderata or at least meets the empirical claims of
its designer. To serve as such an illustration, the program must be
accompanied by an explanation of how it embodies the concept of
understanding being illustrated.
It is conceivable that someone will make a program that
understands without being able to explain it satisfactorily. I
think that is unlikely and would be very disappointed should it
turn out that by tinkering humanity can produce artificial
intelligence but cannot understand intelligence even with the
aid of the intelligent programs themselves.
Therefore, it would be better for me if Searle would split
his stronger claim about AI into two versions - one that says
programs can potentially understand and the other that such a
program would constitute a theory of understanding all by itself.
1a. I have reservations about Schank's program and also about
his doctrines concerning what constitutes a theory of AI.
I am doubtful that Schank's program understands, but I don't
have a formal definition of understanding with which to compare it.
I am not convinced by the dialogs cited, and I think I could invent
other questions using the same vocabulary that would buffalo the
program. I don't know if the program would be able to answer whether
the pleased man would have accepted an offer from the waitress to eat the
hamburger again or whether the offended man would have accepted an
offer to uncook the hamburger. I am, however, uncertain whether
these questions are really criterial for understanding about restaurants.
What if the program were asked, "Might the offended man have decided
that he would never eat again?" I choose these outlandish questions,
because I suspect that the program, like many AI programs, is fragile.
It seems to make sense if you respect it, but if you are skeptical
enough, you can elicit ridiculous behavior. Unfortunately, this
off-the-cuff skepticism is not based on having read Schank's papers
carefully enough to have a fully informed opinion. It would be
worthwhile to have the opinion of someone who has read more, and,
if doubt remains, to perform the experiment.
(I remember a ⊗go program that made rather good moves if you made
reasonable opening moves, but collapsed if you launched unsound
direct attacks on its groups. This collapse contrasts with the
behavior of even the microcomputer chess games that will punish
sufficiently unsound play without regard to any expectations of
sound play).
There is a level of performance that would leave me betting
that Schank's program really understands, but without more of a
theory of understanding, I wouldn't easily be conclusively convinced.
I like the Berkeley answer - that the system knows - and will defend it
against Searle's objections.
1. Suppose that the human doesn't know Chinese, only the set of rules for
making responses to strings of Chinese characters. Suppose further that
the rules are elaborate enough that following them produces the behavior of
an excellent Chinese scholar. I would then say that this system, though not
the "underlying mental system" of the person, understands Chinese. That a
person could carry out rules elaborate enough to behave like a Chinese
scholar is implausible given the limitations of human data-processing
capability. Moreover, AI is not ready to formulate the required rules.
2. There are some weakly analogous cases, however. Some schizophrenics are said
to have split personalities - the same brain carries out the mental processes
of the several personalities.
3. Searle's problem can be mapped into a purely computer framework. Suppose
a program, like the proposed Advice Taker, formulates what it knows as
sentences of logic and decides what to do by reasoning in this logic. Suppose
further that this program can engage in reasonably intelligent dialog - in
the logical language or even in English. Suppose further that someone gives
the program - in logic or in English - a set of rules for manipulating
pictographs but gives no rules for translating the pictographs into its
basic logical language. Suppose further that the rules again suffice to
produce the behavior of a Chinese scholar. Then the interpreted system knows
Chinese and many facts about Chinese literature while the basic system
doesn't.
Actually, the same problems would arise if the interpreted language were English,
except that the basic system would have to be rather obtuse to obey the
rules given in English for manipulating quoted strings of characters without
noticing that these strings could themselves be interpreted in its base language.
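To make the layering concrete, here is a minimal sketch - present-day Python, purely illustrative, with all names hypothetical rather than anything taken from the Advice Taker - of a base system that reasons over sentences it understands while also obeying purely syntactic rules for strings it cannot translate:

# Illustrative sketch only: a "base system" holding sentences it understands,
# together with rules it obeys over opaque strings it has no way to translate.

class BaseSystem:
    def __init__(self):
        self.known_sentences = set()   # sentences in the base logical language
        self.opaque_rules = {}         # uninterpreted string -> uninterpreted string

    def tell(self, sentence):
        # A sentence the base system itself can use.
        self.known_sentences.add(sentence)

    def learn_rule(self, stimulus, response):
        # A rule for manipulating quoted strings, given without any translation.
        self.opaque_rules[stimulus] = response

    def answer(self, query):
        # Answer from its own knowledge if it can; otherwise apply the
        # syntactic rules by rote, with no idea what the symbols mean.
        if query in self.known_sentences:
            return "Yes."
        if query in self.opaque_rules:
            return self.opaque_rules[query]
        return "I cannot answer that."

base = BaseSystem()
base.tell("Hamburgers are food")
base.learn_rule("你吃了汉堡吗?", "吃了, 很好吃.")

print(base.answer("Hamburgers are food"))   # answered by the base system in its own language
print(base.answer("你吃了汉堡吗?"))           # answered by the interpreted layer, which the base system cannot read

The pair consisting of the base system and its rule table behaves like something that knows a little Chinese, while the base system by itself does not, which is all the layered-minds point requires.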
Searle's paper points up the fact that a given brain could be host to several
minds. These minds could either be parallel in the brain, in that the %2hardware%1
interprets them separately, or be organized in layers - where
one mind knows how to obey the rules of another, with or without being able to
translate the other's sentences into its own language.
If such a phenomenon were common, ordinary language would not make the "category
mistake" of saying "John saw himself in the mirror", since it would not
identify the personality John with the physical object. Since the phenomenon
probably doesn't actually occur except with computers, ordinary language users make
this mistake only in connection with computers and officialdoms. The
statement "The computer knows X" or "The government knows X" often elicits
the reply "What program are you talking about, do you mean the personnel
program that has just been moved from an IBM 370/168 to an IBM 3033?" or "What
agency do you mean?".
I turn now to the interpolation
on understanding. In my paper %2Ascribing Mental Qualities to
Machines%1, I advocate ascribing limited beliefs even to thermostats,
even though it doesn't contribute to understanding them. (It might
for a person with sufficiently limited knowledge faced with
a slightly complex thermostatic system). I regard some of the
arguments against such ascription as resembling proposals to
begin the number system with 2 on the grounds that if we only
had to consider sets with 0 and 1 elements, we wouldn't have
any real use for numbers. The answer is that starting with
2 would give the general laws dealing with numbers a less
comprehensible form. Indeed, for a long time 0 was mistakenly omitted from
the number system on the grounds that it wasn't really a number.
My intuition is that a good theory of belief will also admit
trivial cases that wouldn't in themselves justify the theory
but which form the base case of inductive arguments in the theory.
I agree with Newell and Simon that AI should try to make programs
that understand in exactly the same sense as humans do, but I
am skeptical about the specifics.
%3It would be extremely helpful if Searle could put some
of his intuitions about the inability of machines to understand
into a form that challenges the possibility of a specific performance.
Is this possible? I found John Haugeland's challenges to AI very
helpful in developing technical problems%1.
19. I agree that this is a good question to ask, but I take the other
side.
After all the fulminations of the remainder of the paper,
I am still uncertain whether Searle thinks a computer program
for the PDP-60 written by (say) Roger Schank IV could in principle leave
John Searle III undecided whether a man or a machine
was at the other end of the teletype line, giving John Searle III
repeated tries and permission to consult with experts. I am also
undecided as to whether success in this would count as true
intelligence.